
Gaussian elimination algorithm

The Gaussian elimination algorithm is the most widely known basic algorithm for solving systems of linear equations.
The elimination algorithm, now called the Gauss algorithm, was invented in China around 179 AD. In Europe, the algorithm began to be used in 1771 by Leonhard Euler, who called it the most natural way of solving systems of equations. It is therefore somewhat wrongly named after the mathematician Carl Friedrich Gauss (1777-1855), who was not yet born in 1771 [1].


Let us illustrate the Gaussian elimination algorithm on the example of a matrix generated for five one-dimensional B-splines, with the right-hand side set to a vector of ones, i.e. for

\( \begin{bmatrix} \int{B^x_{1,p}B^x_{1,p}}dx & \int{B^x_{1,p}B^x_{2,p}}dx & \cdots & \int{B^x_{1,p}B^x_{5,p}}dx \\ \int{B^x_{2,p}B^x_{1,p}}dx & \int{B^x_{2,p}B^x_{2,p}}dx & \cdots & \int{B^x_{2,p}B^x_{5,p}}dx\\ \vdots & \vdots & \vdots & \vdots \\ \int{B^x_{5,p}B^x_{1,p}}dx & \int{B^x_{5,p}B^x_{2,p}}dx & \cdots & \int{B^x_{5,p}B^x_{5,p}}dx \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ \vdots \\ u_{5} \\ \end{bmatrix} =\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \)
which after integration, according to the transformations presented in chapter 1, gives a matrix
\( \begin{bmatrix} \frac{1}{20} & \frac{13}{120} & \frac{1}{120} & 0 & 0 \\ \frac{13}{120} & \frac{1}{2} & \frac{13}{60} & \frac{1}{120} & 0 \\ \frac{1}{120} & \frac{13}{60} & \frac{11}{20} & \frac{13}{60} & \frac{1}{120} \\ 0 & \frac{1}{120} & \frac{13}{60} & \frac{1}{2} & \frac{13}{120} \\ 0 & 0 & \frac{1}{120} & \frac{13}{120} & \frac{1}{20} \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \)
We start by converting the entries of the matrices to floating-point numbers
\( \begin{bmatrix} 0.05 & 0.10833 & 0.00833 & 0 & 0 \\ 0.10833 & 0.5 & 0.21666 & 0.00833 & 0 \\ 0.00833 & 0.21666 & 0.55 & 0.21666 & 0.00833 \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05\\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \)
We continue by scaling the first row so that there is a one on the diagonal. In other words, we multiply the first row by the inverse of its diagonal term. Here we use a pseudocode convention in which, on the right-hand side of the assignment, we describe how the row is modified, and on the left-hand side we put the row to which we assign the result
\( 1^{st}=1^{st}\frac{1}{A_{1,1}}= 1^{st}\frac{1}{0.05}=1^{st}20 \)
\( \begin{bmatrix} {\color{red}20*}0.05 & {\color{red}20*}0.10833 & {\color{red}20*} 0.00833 & {\color{red}20*}0 & {\color{red}20*}0 \\ 0.10833 & 0.5 & 0.21666 & 0.00833 & 0 \\ 0.00833 & 0.21666 & 0.55 & 0.21666 & 0.00833 \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} {\color{red}20*}1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \)
\( \begin{bmatrix} {\color{red}1} & {\color{red}2.16666} & {\color{red}0.16666} & {\color{red}0} & {\color{red}0} \\ {\color{blue}0.10833} & 0.5 & 0.21666 & 0.00833 & 0 \\ 0.00833 & 0.21666 & 0.55 & 0.21666 & 0.00833 \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05\\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} {\color{red}20} \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \)
Then we subtract from the second row the first row, scaled in such a way as to obtain a zero below the diagonal (in the first column). In other words, we perform the subtraction
\( 2^{nd} = 2^{nd} - 1^{st} A_{2,1}= 2^{nd} - 1^{st} *{\color{blue}0.10833} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0.10833{\color{red}-1*0.10833} & 0.5{\color{red}-2.16666*0.10833} & 0.21666{\color{red}-0.16666*0.10833} & 0.00833{\color{red}-0} & 0{\color{red}-0} \\ 0.00833 & 0.21666 & 0.55 & 0.21666 & 0.00833 \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05\\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ 1{\color{red}-20*0.10833} \\ 1 \\ 1 \\ 1 \end{bmatrix} \)
In the second row, in the 4th column we subtract zero, and in the 5th column we subtract zero from zero
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ {\color{red}0} & {\color{red}0.26528} & {\color{red}0.19860} & {\color{red}0.00833} & {\color{red}0} \\ {\color{blue}0.00833} & 0.21666 & 0.55 & \frac{13}{60} & 0.00833 \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ {\color{red}-1.1666} \\ 1 \\ 1 \\ 1 \end{bmatrix} \)
Similarly, we subtract from the third row the first row, scaled in such a way as to obtain a zero in the first column. In other words, we perform
\( 3^{rd} = 3^{rd} - 1^{st} A_{3,1}= 3^{rd} - 1^{st} {\color{blue}0.00833} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 0.26528 & 0.19860 & 0.00833 & 0 \\ 0.00833{\color{red}-1*0.00833} & 0.21666{\color{red}-2.16666*0.00833} & 0.55{\color{red}-0.16666*0.00833} & 0.21666{\color{red}-0} & 0.00833{\color{red}-0} \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ {\color{red}-1.1666} \\ 1{\color{red}-20*0.00833} \\ 1 \\ 1 \end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 0.26528 & 0.19860 & 0.00833 & 0 \\ {\color{red}0} & {\color{red}0.19861} & {\color{red}0.54861} & {\color{red}0.21666} & {\color{red}0.00833} \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833\\ 0 & 0 & 0.00833 & 0.10833 & 0.05\\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ -1.1666 \\ {\color{red}0.83333} \\ 1 \\ 1 \end{bmatrix} \)
We can now stop subtracting the first row from the remaining rows, because the following rows (fourth and fifth) already contain zeros in the first column.
Now we are going to scale the second row, so that in a moment we can use it to generate zeros in the second column. We multiply the second row by the inverse of its diagonal term
\( 2^{nd} = 2^{nd} \frac{1}{A_{2,2}}=2^{nd}\frac{1}{0.26528}=2^{nd}3.76963 \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 0.26528{\color{red}*3.76963} & 0.19860{\color{red}*3.76963} & 0.00833{\color{red}*3.76963} & 0 \\ 0 & 0.19861 & 0.54861 & 0.21666 & 0.00833 \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ -1.1666{\color{red}*3.76963} \\ 0.83333 \\ 1 \\ 1 \end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & {\color{red}1} & {\color{red}0.74864} & {\color{red}0.03140} & 0 \\ 0 & {\color{blue}0.19861} & 0.54861 & 0.21666 & 0.00833 \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833\\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ {\color{red}-4.39765} \\ 0.83333 \\ 1 \\ 1 \end{bmatrix} \)
Then we subtract from the third row the second row, scaled in such a way as to obtain a zero below the diagonal (in the second column). In other words, we perform the subtraction
\( 3^{rd} = 3^{rd} - 2^{nd} A_{3,2}= 3^{rd} - 2^{nd}{\color{blue}0.19861} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0.19861{\color{red}-1*0.19861} & 0.54861{\color{red}-0.74864*0.19861} & 0.21666{\color{red}-0.03140*0.19861} & 0.00833{\color{red}-0} \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ -4.39765 \\ 0.83333{\color{red}+4.39765*0.19861} \\ 1 \\ 1 \end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & {\color{red}0} & {\color{red}0.39992} & {\color{red}0.21042} & {\color{red}0.00833} \\ 0 & {\color{blue}0.00833} & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ -4.39765 \\ {\color{red}1.70674} \\ 1 \\ 1 \end{bmatrix} \)
We now use the second row to generate a zero in the second column in the fourth row. To do this, we subtract the second row scaled by 0.00833
\( 4^{th}=4^{th}-2^{nd}A_{4,2}=4^{th}-2^{nd}{\color{blue}0.00833} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 0.39992 & 0.21042 & 0.00833 \\ 0 & 0.00833{\color{red}-1*0.00833} & 0.21666{\color{red}-0.74864*0.00833} & 0.5{\color{red}-0.03140*0.00833} & 0.10833 {\color{red}-0} \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 1.70674 \\ 1{\color{red}+4.39765*0.00833} \\ 1 \end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 0.39992 & 0.21042 & 0.00833 \\ 0 & {\color{red}0} & {\color{red}0.21042} & {\color{red}0.499738} &{\color{red}0.10833} \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 1.70674 \\ {\color{red}1.03663} \\ 1 \end{bmatrix} \)
We have three more terms to zero out: \( A_{4,3}, A_{5,3}\textrm{ and }A_{5,4} \).
For this we will use the third row; first we scale it so that we get the value 1 on the diagonal
\( 3^{rd}=3^{rd}\frac{1}{A_{3,3}}=3^{rd}\frac{1}{0.39992}=3^{rd}2.50050 \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 0.39992{\color{red}*2.50050} & 0.21042{\color{red}*2.50050} & 0.00833{\color{red}*2.50050} \\ 0 & 0 & 0.21042 & 0.499738 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 1.70674{\color{red}*2.50050} \\ 1.03663 \\ 1 \end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & {\color{red}1} & {\color{red}0.52615} & {\color{red}0.02082} \\ 0 & 0 & 0.21042 & 0.499738 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ {\color{red}4.2677} \\ 1.03663 \\ 1 \end{bmatrix} \)
Now, the third row can be used to generate the zeros in the third column below the diagonal
\( 4^{th}=4^{th}-3^{rd}A_{4,3}=4^{th}-3^{rd}0.21042 \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0.21042{\color{red}-1*0.21042} & 0.499738{\color{red}-0.52615*0.21042} & 0.10833{\color{red}-0.02082*0.21042} \\ 0 & 0 & 0.00833 & 0.10833 & 0.05 \\ \end{bmatrix} \nonumber \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 4.2677 \\ 1.03663{\color{red}-4.2677*0.21042} \\ 1 \end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & {\color{red}0} & {\color{red}0.38902} & {\color{red}0.10395} \\ 0 & 0 & {\color{blue}0.00833} & 0.10833 & 0.05 \\ \end{bmatrix} \nonumber \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 4.2677 \\ {\color{red}0.13862} \\ 1 \end{bmatrix} \)
\( 5^{th}=5^{th}-3^{rd}A_{5,3}=5^{th}-3^{rd}{\color{blue}0.00833} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & 0.38902 & 0.10395 \\ 0 & 0 & 0.00833{\color{red}-1*0.00833} & 0.10833{\color{red}-0.52615*0.00833} & 0.05{\color{red}-0.02082*0.00833} \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 4.2677 \\ 0.13862 \\ 1{\color{red}-4.2677*0.00833} \end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & 0.38902 & 0.10395 \\ 0 & 0 & {\color{red}0} & {\color{red}0.10394} & {\color{red}0.04982} \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 4.2677 \\ 0.13862 \\ {\color{red}0.96445} \end{bmatrix} \)
Next, we scale the fourth row
\( 4^{th}=4^{th}\frac{1}{A_{4,4}}=4^{th}\frac{1}{0.38902}=4^{th}2.57056 \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & 0.38902{\color{red}*2.57056} & 0.10395{\color{red}*2.57056} \\ 0 & 0 & 0 & 0.10394 & 0.04982 \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ -4.39765 \\ 4.2677 \\ 0.13862{\color{red}*2.57056} \\ 0.96445\end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & {\color{red}1} & {\color{red}0.26721} \\ 0 & 0 & 0 & {\color{blue}0.10394} & 0.04982 \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ -4.39765 \\ 4.2677 \\ {\color{red}0.35633} \\ 0.96445\end{bmatrix} \)
And finally we subtract the fourth row from the fifth so that we get a zero in position
\( A_{5,4} \)
\( 5^{th}=5^{th}-4^{th}{\color{blue}0.10394} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & 1 & 0.26721 \\ 0 & 0 & 0 & 0.10394{\color{red}-1*0.10394} & 0.04982{\color{red}-0.26721*0.10394} \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 4.2677 \\ 0.35633 \\ 0.96445{\color{red}-0.35633*0.10394}\end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & 1 & 0.26721 \\ 0 & 0 & 0 & {\color{red}0} & {\color{red}0.02205} \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 4.2677 \\ 0.35633 \\ {\color{red}0.92741}\end{bmatrix} \)
Finally, we can scale the fifth row to have a one on the diagonal
\( 5^{th}=5^{th}\frac{1}{A_{5,5}}=5^{th}\frac{1}{0.02205}=5^{th}45.35147 \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & 1 & 0.26721 \\ 0 & 0 & 0 & 0 & 0.02205{\color{red}*45.35147} \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 4.2677 \\ 0.35633 \\ {\color{red}0.92741*45.35147}\end{bmatrix} \)
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & 1 & 0.26721 \\ 0 & 0 & 0 & 0 & {\color{red}1} \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix}20 \\ -4.39765 \\ 4.2677 \\ 0.35633 \\ {\color{red}42.05942}\end{bmatrix} \)
Thus, we have obtained the upper triangular matrix.

The algorithm we performed step-by-step can be summarized as follows:


for irow=1,N // loop through rows
  for icol=irow+1,N // row scaling
    A(irow,icol)=A(irow,icol)/A(irow,irow)
  end loop
  b(irow)=b(irow)/A(irow,irow)
  A(irow,irow)=1.0
  for irow2=irow+1,N // row subtractions
    for icol=irow+1,N
      A(irow2,icol)=A(irow2,icol)-A(irow2,irow)*A(irow,icol)
    end loop
    b(irow2)=b(irow2)-A(irow2,irow)*b(irow)
    A(irow2,irow)=0
  end loop
end loop

The last step is to run the back substitution algorithm to solve our equation system starting with the last equation.
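The whole procedure can be gathered into a short program. Below is a minimal Python sketch of the pseudocode above plus back substitution (the function and variable names are ours); run in full double precision, it reproduces the solution reported by MATLAB later in this chapter.

```python
def gaussian_elimination(A, b):
    """Forward elimination without pivoting (as in the pseudocode above),
    followed by back substitution; A and b are modified in place."""
    N = len(A)
    for irow in range(N):
        pivot = A[irow][irow]
        for icol in range(irow + 1, N):       # row scaling
            A[irow][icol] /= pivot
        b[irow] /= pivot
        A[irow][irow] = 1.0
        for irow2 in range(irow + 1, N):      # row subtractions
            factor = A[irow2][irow]
            for icol in range(irow + 1, N):
                A[irow2][icol] -= factor * A[irow][icol]
            b[irow2] -= factor * b[irow]
            A[irow2][irow] = 0.0
    u = [0.0] * N                             # back substitution (diagonal is 1)
    for i in range(N - 1, -1, -1):
        u[i] = b[i] - sum(A[i][j] * u[j] for j in range(i + 1, N))
    return u

# the five-spline matrix from this chapter
A = [[1/20, 13/120, 1/120, 0, 0],
     [13/120, 1/2, 13/60, 1/120, 0],
     [1/120, 13/60, 11/20, 13/60, 1/120],
     [0, 1/120, 13/60, 1/2, 13/120],
     [0, 0, 1/120, 13/120, 1/20]]
u = gaussian_elimination(A, [1.0] * 5)
print([round(x, 4) for x in u])
```

The hand calculation above differs from this result only through the truncation of all numbers to five decimal places.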


In other words, our system of equations
\( \begin{bmatrix} 1 & 2.16666 & 0.16666 & 0 & 0 \\ 0 & 1 & 0.74864 & 0.03140 & 0 \\ 0 & 0 & 1 & 0.52615 & 0.02082 \\ 0 & 0 & 0 & 1 & 0.26721 \\ 0 & 0 & 0 & 0 & 1 \\ \end{bmatrix} \\ \begin{bmatrix} u_{1} \\ u_{2} \\ u_{3} \\ u_{4} \\ u_{5} \\ \end{bmatrix} = \begin{bmatrix} 20 \\ -4.39765 \\ 4.2677 \\ 0.35633 \\ 42.05942 \end{bmatrix} \)
We write
\( 1u_1+2.16666u_2+0.16666u_3=20 \\ 1u_2+0.74864u_3+0.03140u_4=-4.39765 \\ 1u_3+0.52615u_4+0.02082u_5=4.2677 \\ 1u_4+0.26721u_5=0.35633 \\ 1u_5=42.05942 \)
And we solve, starting from the last equation

\( u_5 = {\color{red}42.05942} \\ u_4 = 0.35633-0.26721u_5=0.35633-0.26721*{\color{red}42.05942} \\ ={\color{blue}-10.88237} \\ u_3 = 4.2677-0.52615u_4-0.02082u_5 \\ =4.2677-0.52615*{\color{blue}(-10.88237)} -0.02082*{\color{red}42.05942}={\color{green}9.11779} \\ u_2 = -4.39765-0.74864u_3-0.03140u_4 \\ =-4.39765-0.74864*{\color{green}9.11779}-0.03140*{\color{blue}(-10.88237)} ={\color{cyan}-10.88189} \\ u_1 = 20 - 2.16666u_2-0.16666u_3 \\ =20 - 2.16666*{\color{cyan}(-10.88189)}-0.16666*{\color{green}9.11779}=42.05777 \)
Now let us make the following observations.
If we run, for example, MATLAB (or the free version of OCTAVE) and introduce our matrix A=[1/20 13/120 1/120 0 0 ; 13/120 1/2 13/60 1/120 0; 1/120 13/60 11/20 13/60 1/120; 0 1/120 13/60 1/2 13/120; 0 0 1/120 13/120 1/120]
and the right-hand side vector
b=[1 1 1 1 1]
and solve the system of equations with the MATLAB solver (written by Tim Davis from the University of Florida; the operation of solving the system of equations is denoted by the backslash \) we will get a slightly different solution
x=A\b'
x=42.0588, -10.8824, 9.1176, -10.8824, 42.0588
The reason for this is that we have ignored the so-called pivoting in our algorithm and truncated all numbers to five decimal places. The Gaussian elimination algorithm is guaranteed to work without pivoting when the matrix has certain specific properties, for example when it is symmetric and positive definite, or when it is diagonally dominant, that is, when the terms on the diagonal have larger values than the terms outside the diagonal.
For other matrices, the non-pivoting Gaussian elimination algorithm works in some cases and fails in others. In our case the matrix is symmetric, but it is not diagonally dominant; for example, the term \( A_{1,1}=0.05 \) (marked in red) is smaller than the terms \( A_{1,2}=A_{2,1}=0.10833 \) (marked in blue).
\( \begin{bmatrix} {\color{red}0.05} & {\color{blue}0.10833} & 0.00833 & 0 & 0 \\ {\color{blue}0.10833}& 0.5 & 0.21666 & 0.00833 & 0 \\ 0.00833 & 0.21666 & 0.55 & 0.21666 & 0.00833 \\ 0 & 0.00833 & 0.21666 & 0.5 & 0.10833 \\ 0 & 0 & 0.00833 & 0.10833 & 0.05\\ \end{bmatrix} \)
In this case, the Gaussian elimination algorithm loses accuracy, for example when dividing the first row by the small term on the diagonal, \( A_{1,1}=0.05 \). This is because the computer stores numbers in an approximate manner, for example truncating everything after the fifth decimal place, and dividing by a small pivot amplifies this truncation error. To reduce the error, we add pivoting to the Gaussian elimination algorithm: before eliminating each column, we swap rows so that the largest available term lands on the diagonal.
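Partial (row) pivoting takes only a few extra lines. A hedged Python sketch of this idea (our own illustrative code, not the textbook's):

```python
def gaussian_elimination_pivoted(A, b):
    """Gaussian elimination with partial pivoting: before eliminating a column,
    swap in the row whose entry in that column is largest in absolute value."""
    N = len(A)
    for k in range(N):
        p = max(range(k, N), key=lambda r: abs(A[r][k]))   # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, N):
            f = A[i][k] / A[k][k]
            for j in range(k, N):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * N                                          # back substitution
    for i in range(N - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, N))) / A[i][i]
    return x

A = [[1/20, 13/120, 1/120, 0, 0],
     [13/120, 1/2, 13/60, 1/120, 0],
     [1/120, 13/60, 11/20, 13/60, 1/120],
     [0, 1/120, 13/60, 1/2, 13/120],
     [0, 0, 1/120, 13/120, 1/20]]
x = gaussian_elimination_pivoted(A, [1.0] * 5)
```

With pivoting, the multipliers used in the row subtractions never exceed one in absolute value, which limits the growth of the truncation errors.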


The Gaussian elimination algorithm is expensive. This means that for a matrix with N rows and columns, it performs approximately \( N^3 \) operations. So what is the largest matrix we can process with the Gaussian elimination algorithm in 24 hours? As of today (January 2019), processor clocks run at around 3 GHz. This means that the processor can execute about \( 3\times 10^9 \) additions or multiplications every second. A day has \( 24*60*60=86400 \) seconds. So the processor can perform about \( 3\times 10^9 * 86400 = 2.592\times 10^{14} \) addition and multiplication operations during the day. The cube root of this number is \( (2.592\times 10^{14})^{\frac{1}{3}}\approx 63760 \), thus the largest matrix that we can process has a size of about 63,760. It represents a two-dimensional computational grid with roughly 250 elements in each direction. So, it is a relatively small computational grid. We need better algorithms. Moreover, we conducted all our estimates assuming that the processor is not waiting for the data, but is processing it all the time, and that we have enough RAM and the optimal cache structure of the processor. Usually, the situation is not so ideal, so in practice it is difficult to process with Gaussian elimination matrices larger than 10,000, which corresponds to a 100 by 100 mesh.
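The estimate above can be reproduced in a few lines of Python (the 3 GHz clock and the one-operation-per-cycle rate are the same assumptions as in the text):

```python
import math

clock_ops_per_second = 3e9            # assumed: 3 GHz, one addition/multiplication per cycle
seconds_per_day = 24 * 60 * 60        # 86400 seconds
ops_per_day = clock_ops_per_second * seconds_per_day   # about 2.592e14 operations
largest_N = round(ops_per_day ** (1 / 3))   # elimination costs roughly N^3 operations
mesh_size = round(math.sqrt(largest_N))     # 2D grid with mesh_size^2 unknowns
print(largest_N, mesh_size)
```
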

If there are multiple right-hand sides, the Gaussian elimination described above must be performed again for each of them. It is possible, however, to save the multipliers by which we update the right-hand side, so that the elimination is performed only once and only the right-hand-side update is repeated for each new right-hand side. This is done by the LU factorization algorithm.
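The idea can be sketched as follows: factor the matrix once, then each new right-hand side costs only a forward and a backward substitution, about \( N^2 \) operations instead of \( N^3 \). A minimal Doolittle-style sketch in Python (our own illustrative code):

```python
def lu_factor(A):
    """Doolittle LU factorization without pivoting, done in place: afterwards A
    holds U on and above the diagonal and the L multipliers below it."""
    N = len(A)
    for k in range(N):
        for i in range(k + 1, N):
            A[i][k] /= A[k][k]                  # multiplier L(i,k)
            for j in range(k + 1, N):
                A[i][j] -= A[i][k] * A[k][j]

def lu_solve(A, b):
    """Solve L U x = b: forward substitution with L (unit diagonal),
    then back substitution with U."""
    N = len(A)
    y = list(b)
    for i in range(N):
        y[i] -= sum(A[i][j] * y[j] for j in range(i))
    x = [0.0] * N
    for i in range(N - 1, -1, -1):
        x[i] = (y[i] - sum(A[i][j] * x[j] for j in range(i + 1, N))) / A[i][i]
    return x

A = [[1/20, 13/120, 1/120, 0, 0],
     [13/120, 1/2, 13/60, 1/120, 0],
     [1/120, 13/60, 11/20, 13/60, 1/120],
     [0, 1/120, 13/60, 1/2, 13/120],
     [0, 0, 1/120, 13/120, 1/20]]
lu_factor(A)                        # one expensive factorization...
x1 = lu_solve(A, [1, 1, 1, 1, 1])   # ...then each right-hand side is cheap
x2 = lu_solve(A, [1, 0, 0, 0, 0])
```
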

The Gaussian elimination algorithm described above does not skip zeros. Multiplying zeros by nonzero numbers and subtracting the zero results is generally a waste of time. The Gaussian elimination algorithm can therefore be accelerated by dropping all the zeros: if we know a priori where the zeros are in the matrix, we can leave them out and speed up the computation. This will be the subject of the frontal and multi-frontal algorithms.
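For example, the matrix in this chapter is pentadiagonal: every row has at most two nonzeros on each side of the diagonal. If we know this half bandwidth p in advance, the elimination loops can be restricted to the band, reducing the cost from \( N^3 \) to about \( Np^2 \). A hypothetical Python sketch of this simple special case (the frontal and multi-frontal algorithms are far more general):

```python
def banded_elimination(A, b, p):
    """Gaussian elimination that skips the known zeros outside a band of
    half-width p (for the pentadiagonal five-spline matrix, p = 2)."""
    N = len(A)
    for k in range(N):
        for i in range(k + 1, min(k + p + 1, N)):      # only rows inside the band
            f = A[i][k] / A[k][k]
            for j in range(k, min(k + p + 1, N)):      # only columns inside the band
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * N                                      # banded back substitution
    for i in range(N - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, min(i + p + 1, N)))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[1/20, 13/120, 1/120, 0, 0],
     [13/120, 1/2, 13/60, 1/120, 0],
     [1/120, 13/60, 11/20, 13/60, 1/120],
     [0, 1/120, 13/60, 1/2, 13/120],
     [0, 0, 1/120, 13/120, 1/20]]
x = banded_elimination(A, [1.0] * 5, 2)
```

This works because elimination without pivoting never creates nonzeros outside the original band.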

If the matrix has the structure of a Kronecker product of two smaller matrices (which was the case in the example from chapter one), then it is possible to use the alternating directions solver algorithm with linear computational complexity.
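To see why Kronecker structure helps, note the identity \( (A\otimes B)\,\mathrm{vec}(X)=\mathrm{vec}(BXA^T) \): a system with the large matrix \( A\otimes B \) can be solved using only a handful of solves with the small factors A and B. A hedged Python sketch of this underlying trick (not the alternating-directions solver itself; all names are ours):

```python
def solve(M, v):
    """Helper: small dense solve by Gaussian elimination with partial pivoting."""
    N = len(M)
    M = [row[:] for row in M]
    v = v[:]
    for k in range(N):
        p = max(range(k, N), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        v[k], v[p] = v[p], v[k]
        for i in range(k + 1, N):
            f = M[i][k] / M[k][k]
            for j in range(k, N):
                M[i][j] -= f * M[k][j]
            v[i] -= f * v[k]
    x = [0.0] * N
    for i in range(N - 1, -1, -1):
        x[i] = (v[i] - sum(M[i][j] * x[j] for j in range(i + 1, N))) / M[i][i]
    return x

def kron_solve(A, B, b):
    """Solve (A kron B) x = b using only solves with the small factors A and B,
    via the identity (A kron B) vec(X) = vec(B X A^T), with column-stacking vec."""
    n, m = len(A), len(B)
    C = [[b[i * m + r] for i in range(n)] for r in range(m)]               # unpack b into m-by-n C
    Zcols = [solve(B, [C[r][i] for r in range(m)]) for i in range(n)]      # solve B Z = C, column by column
    Xrows = [solve(A, [Zcols[i][k] for i in range(n)]) for k in range(m)]  # solve A X^T = Z^T, row by row
    return [Xrows[r][j] for j in range(n) for r in range(m)]               # x = vec(X)

# example: A and B are 2x2, so A kron B is 4x4, but we never form it
A = [[2.0, 1.0], [1.0, 2.0]]
B = [[3.0, 1.0], [1.0, 3.0]]
b = [23.0, 29.0, 31.0, 37.0]   # equals (A kron B) applied to [1, 2, 3, 4]
x = kron_solve(A, B, b)
```

For factors of size n, this costs O(n) small solves instead of one solve of size \( n^2 \), which is where the linear computational complexity of the alternating-directions approach comes from.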

If we are satisfied with a result of a certain limited accuracy, e.g. 0.0001, it is possible to compute an approximate solution using so-called iterative algorithms. However, these algorithms do not work for all types of matrices.
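One classical iterative method is Gauss-Seidel, which is guaranteed to converge for symmetric positive definite or diagonally dominant matrices but may diverge for others. A minimal sketch (our own illustrative code), applied to the five-spline system, where it happens to converge:

```python
def gauss_seidel(A, b, sweeps=2000):
    """Gauss-Seidel iteration: sweep through the rows, solving row i for x[i]
    while always using the newest available values of the other unknowns."""
    N = len(A)
    x = [0.0] * N
    for _ in range(sweeps):
        for i in range(N):
            s = sum(A[i][j] * x[j] for j in range(N) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[1/20, 13/120, 1/120, 0, 0],
     [13/120, 1/2, 13/60, 1/120, 0],
     [1/120, 13/60, 11/20, 13/60, 1/120],
     [0, 1/120, 13/60, 1/2, 13/120],
     [0, 0, 1/120, 13/120, 1/20]]
x = gauss_seidel(A, [1.0] * 5)
```

In practice one stops the sweeps as soon as the residual drops below the desired accuracy, rather than after a fixed number of iterations.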


Last modified Monday, 27 September 2021, 19:08:19 UTC. Author: Maciej Paszynski